A public forum for people to chat about getting into the marketplace
Got questions about how the PRB is going? Ask away.
I think the new PRB changes are really positive and in the right direction. I'm kind of geeking out on seeing one of my ideas make the final cut! A couple of things that weren't addressed in the post but got my attention...
Once a marketplace item has been flagged "has not been reviewed by the PRB", does the developer need to resubmit said item if they want the flag removed? You said they [the developer] would hit a checkbox to have the item publish. Will there be a "Don't publish this, because I still want it reviewed" box as well? Something to get the attention of the admins, or at least dump it back into the queue?
Is there a window after which items that don't pass the linter will automatically be rejected and removed from the queue?
@justrj Could you ask your question in a new thread, please? I'd really love to know the answer, and it'd have a better chance in its own thread.
I am afraid "has not been approved by the PRB" might sound like "has issues" to many buyers.
There are some good ideas in the update to the PRB. Here's my 2c:
Should addons with low ratings go through the PRB / linter again, to sort of clear them out of the approved market listing? A few of the eCommerce addons were approved at some point but clearly fail to work (one even had a note added by the Core Team saying the developer isn't responding and the addon clearly doesn't work).
+1
Any addon updates now go through the linter. If they fail, it takes a PRB Admin to approve exceptions to the linter tests before the update goes live. However, that won't do anything for abandoned addons, nor for addons that pass the linter but have functional issues.
I just tried to go to the PRB (I am already in it), but I currently don't see a link to it anywhere - is it unavailable?
As I am already a PRB member, will I have to re-apply to be in it?
You can find the info here. I was removed momentarily as well.
http://www.concrete5.org/about/blog/open-source-and-strategy/prb-re...
Do we have a link/list of linter rules we can prepare for?
Korvin has posted a linter page:
http://www.concrete5.org/marketplace/linter...
I have not tried the page, but you should be able to use that to check packages before uploading to the marketplace or PRB.
A heads-up for everyone.
The new PRB strategy is aimed at being faster and more responsive. However, to achieve that it will also need to be more brutally direct.
- Red thumbs will be used liberally whenever any observation is made that precludes approval (no more being kind and just leaving a thumb blank pending resolution).
- Green thumbs will only be given when a review heading is good enough for approval.
- The PRB will no longer work as an educational forum. If a developer/submission obviously needs significant assistance, then the submission will be rejected and the developer should use this 'Submit to Marketplace' forum to get whatever help they need before resubmitting. http://www.concrete5.org/community/forums/submitting-to-the-marketp...
- We have an approval-by-default time limit. Approval by default is in many ways a second-class level of approval. It will be much better for developers to address issues quickly and get real approval.
Hey John, thanks for jumping in - my question was this: What is the procedure if the dev doesn't want to make use of the default approval and wants their submission reviewed?
Might that be what the "postpone for 7 days" button is for, @JohntheFish? I resisted the temptation to push it prior to you bumping my packages to the MP, and now I can't see it any more :D
Postpone +7 covers many situations: where a developer doesn't want their submission approved yet, or where a PRB admin decides that, no matter what, it shouldn't be auto-approved and needs a further human check.
Each click is cumulative!
Following auto approval, a submission is currently still accessible in the PRB, so it can gather further reviews and the seal of approval later.
That makes it so much clearer, thanks John. Sounds like a good system.
Yes, thank you - that makes total sense.
I noticed the PRB now also contains some add-ons that were pulled from the marketplace. They all seem to have some problems with the automatic testing, missing exec statements and the like.
But in at least one case nothing seems wrong at all.
Dojo Page Filter didn't fail any test, reviews are all good, it says it is compatible with the latest versions...
So I wonder, why was it pulled from the market?
I think that is because PRB Admins now have the power to pull addons from the marketplace, so the pulled list is a way the PRB can interact with the developer to check if a pulled item is then fixed and put it back again.
I think in the case of Dojo Page Filter, it may have been superseded with Dojo Page Filter Pro.
I have sent a PM to Dojo to check.
Just updated one of our add-ons; a few comments on the automatic tests there:
1. Minimum version matches controller.php
This add-on is quite old; it has older versions that support 5.4 as well. However, the new versions of the add-on no longer support 5.4 or older, for reasons we all know.
This test now fails because the concrete5 add-on page is marked with a minimum version of 5.4.2, but in the latest version's controller.php we require 5.5.0.
2. Third-party libraries included in the checks
I'm not sure it's a good idea to fork each and every third-party library we include in the package. These tests are now failing because of an included third-party library:
- Don't use json_encode / json_decode functions
- Each PHP file contains exec or die statements
Otherwise, it's really great to see these tests running against updated versions as well! I'm sure this will increase the quality of the add-ons even more. And on the other hand, it's really good to see that these are not "show stoppers" for getting the updates pushed to the customers, e.g. for the reasons I've pointed out above.
Antti / Mainio
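The actual linter tests aren't published, but the "exec or die" check presumably looks for concrete5's standard access guard (`defined('C5_EXECUTE') or die('Access Denied.');`) at the top of each PHP file. A rough local approximation might look like the sketch below; the regex is a guess at what the real, unpublished test checks, not the linter's own code:

```python
import os
import re

# concrete5 5.x convention: every PHP file should contain an access guard
# such as: defined('C5_EXECUTE') or die('Access Denied.');
# (This pattern is an assumption about the unpublished linter test.)
GUARD = re.compile(r"defined\(\s*['\"]C5_EXECUTE['\"]\s*\)\s*or\s*die\b")

def files_missing_guard(package_dir):
    """Return the .php files under package_dir that lack the guard.

    Bundled third-party libraries will show up here too, which is
    exactly why such a test can fail for an otherwise valid add-on.
    """
    missing = []
    for root, _dirs, names in os.walk(package_dir):
        for name in names:
            if name.endswith('.php'):
                path = os.path.join(root, name)
                with open(path, encoding='utf-8', errors='ignore') as f:
                    if not GUARD.search(f.read()):
                        missing.append(path)
    return missing
```

Running something like this over a package before uploading would flag the same third-party files the marketplace linter complains about, without a round trip through the queue.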
And one more thing I forgot to mention in my previous posts:
Really great to see the new PRB policy! I'm sure this will get the marketplace growing more quickly, and I hope it will have some effect on quality as well, since the PRB now expects people to know this stuff prior to submitting and no longer acts as an educational board.
On the other hand, it would be great if there were a PRB-like reviewing system for novices, so that they could get their code right and learn how to build better concrete5 packages. I know this forum already exists, but any code shared here is public for everyone.
By the way, where can we get the code of the linter to run the tests ourselves? I believe it was mentioned in the blog post by @frz that you've open-sourced the code...? I see this repository on GitHub:
https://github.com/concrete5/prb_lint_tests...
But I don't see all the tests there. (or am I just going blind?)
The GitHub repo is there so developers can submit more plugin tests. The details of most tests are kept under wraps.
If you post me a link to the addon, I can approve linter exceptions, so the new version will become downloadable. Exceptions are sticky, so once you have them for an addon, you shouldn't have problems in the future.
Developers can test against the linter as per
http://www.concrete5.org/community/forums/submitting-to-the-marketp...
OK, it's sad that they are kept under wraps... :( I don't think there is anything people could "abuse" in these tests, so I think it would be great if they were public.
And do we really need to get updates re-approved? I might need to take back some of my praise if that's the case...
I'll send you the link.
I wasn't a great fan of that, but it does make some sense. There has been criticism in the past that addons could diverge from standards as much as they like once approved.
Hopefully, once a few exemptions have been checked, the overhead of approving updates will disappear.
In fact, I think this is a downgrade from the previous situation if a middle-man needs to take action, e.g. when we need to push a critical update to the customers. I'm not saying add-ons that need critical updates should be in the listing in the first place, but e.g. when the core is updated, there might be edge scenarios that we don't notice in our own environments but some customers do.
E.g. some are still using PHP 5.2 because the core still works with 5.2 (fortunately, not for long), but we only test our add-ons with PHP 5.3+.
One more:
If you hadn't mentioned it and I hadn't posted to this thread, I actually would've thought (as I did) that the update had been automatically pushed to the customers. There's nothing telling me these are show stoppers when the add-on is perfectly valid but the tests mentioned above did not pass for reasons outside the add-on's scope.
Please PM Franz on that. He needs to get the message from more than just me.
When I get a few minutes, I will be posting a minor update for a few of my addons to get the exceptions marked and cleared now, rather than waiting until it's critical.
But e.g. in the situation I demonstrated, it will not help, because these errors will still be there for all future versions (for the reasons posted above).
Exceptions are sticky. Once marked, they hold for all future versions of an addon.
OK, thanks for that info. Helps me understand it a bit more.
But on the other hand, I'm not sure it helps the underlying problem any more than the previous situation did. For example, in this particular case, if you marked the "Each PHP file contains exec or die statements" test to be bypassed as sticky (for all future versions), what prevents us from adding another file without the die statement to the package? In that case, the PRB standards would no longer be followed, which makes the check basically useless if the goal is to make sure the add-ons don't diverge from the standards.
I like that I can see if any tests fail when I update the add-on (although I would like to run these tests on my local machine already, if they were in the GitHub repo), but I don't think the middle-man process brings anything to the table if the add-on has already been approved once.
I agree it's a bit problematic to bypass the middle man if someone updates the add-on to a "totally revamped and more awesome v2.0" where everything has changed, but for minor updates / bug fixes, I think this is a total waste of everyone's time and completely unnecessary. And as pointed out above, it does not add any more control over following the PRB standards.
I also PMd frz with the points I raised here.
We will make sure there's clearer messaging on what to do if an update fails tests.
Running updates through the same process that new submissions go through makes a lot of sense to us. Historically, there was nothing keeping someone from submitting an add-on that was "hello world" at version 1 and "rm *.*" at version 2. For a customer coming to the marketplace and getting that add-on at version 2, we've completely failed on our basic promise of "it works and won't destroy your site". They wouldn't care that some version ages ago was tested.
As John pointed out, exceptions are sticky, so while we do expect some initial influx of traffic as things get grandfathered in, in the big picture we see this as adding a lot of value to both customers and developers.
There is a linter you can use to test. We didn't open source the existing tests because the last thing we need is a bunch of bright college students proving that our tests are imperfect in some way. We did open source the testing framework, because we'd love bright college students to think up additional tests and send them our way.
Hope that helps clarify.
I can swallow (with a grin) the fact that each and every little bug fix needs to go through the approval process if the tests haven't been marked to be bypassed earlier; that's fine. At least the tests need to be marked only once. And I agree (as I pointed out earlier) that the whole new process enhances quality, no doubt about it.
But I still don't understand the Microsoft-style thinking behind not opening up the tests when you've opened up the framework itself. I could understand this point of view if the whole linter were still closed source. So little software is perfect, so why worry about that? And if there are actual ISSUES in the tests, I'm sure they will be fixed much more quickly if they are open source.
And yes, I know there's the linter tool at c5.org that has been mentioned a couple of times, but it would just make things easier if the tests could be run locally, as that would automate part of the add-on publishing process. I'm sure you yourselves have had at least some benefit from having the CI tests in place in the c5 GitHub repo, right?
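The "minimum version matches controller.php" test discussed earlier could likewise be approximated locally: pull `$appVersionRequired` out of the package controller (the standard concrete5 package controller property) and compare it against the minimum version declared on the marketplace page. The real test is unpublished, so this is only a plausible reading of what it checks:

```python
import re

def app_version_required(controller_source):
    """Extract $appVersionRequired from a concrete5 package controller.php,
    e.g. from: protected $appVersionRequired = '5.5.0';"""
    m = re.search(r"\$appVersionRequired\s*=\s*['\"]([\d.]+)['\"]",
                  controller_source)
    return m.group(1) if m else None

def version_tuple(v):
    """'5.5.0' -> (5, 5, 0), for numeric comparison of dotted versions."""
    return tuple(int(part) for part in v.split('.'))

def matches_marketplace(controller_source, marketplace_minimum):
    """True if the controller's requirement does not exceed the minimum
    version declared on the marketplace add-on page. (One guess at the
    rule; the actual linter test may well compare differently.)"""
    required = app_version_required(controller_source)
    return (required is not None
            and version_tuple(required) <= version_tuple(marketplace_minimum))
```

The failure mode described above (page marked 5.4.2, controller requiring 5.5.0) would fail this check locally before the package is ever uploaded.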
*disregard, it's working now.